Structured reporting of pelvic MRI leads to better treatment planning of uterine leiomyomas.
Over the years, the role of the radiologist within the multidisciplinary team has evolved remarkably, with imaging providing crucial information for patient management. Through close collaboration with referring clinicians, most radiology practices now strive for their radiology reports to provide the maximum value for individualized patient care [1]. The development of structured radiology reports has therefore gained impetus as an essential tool for delivering personalized medicine. Structured report templates provide a platform that can deliver clear, concise, consistent and actionable reports to assist the referring clinician in triaging the patient to appropriate treatment [1]. The key to adding value to radiology reporting lies in disease-specific structured reports developed by radiologists in collaboration with the clinical management team. However, in an era of increasing workload, striking the balance between a succinct, generic structured report and a time-consuming disease-specific report is important.
Precision radiogenomics: fusion biopsies to target tumour habitats in vivo.
Funder: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 766030, the Cancer Research UK Cambridge Institute with core grant C14303/A17197, and the Mark Foundation for Cancer Research and Cancer Research UK Cambridge Centre (C9685/A25177). High-grade serous ovarian cancer lesions display a high degree of heterogeneity on CT scans. We have recently shown that regions with distinct imaging profiles can be accurately biopsied in vivo using a technique based on the fusion of CT and ultrasound scans.
Recent advances of HCI in decision-making tasks for optimized clinical workflows and precision medicine.
The ever-increasing amount of biomedical data is enabling new large-scale studies, although these require ad hoc computational solutions. The most recent Machine Learning (ML) and Artificial Intelligence (AI) techniques have achieved outstanding performance and an important impact in clinical research, aiming at precision medicine as well as improved healthcare workflows. However, the inherent heterogeneity and uncertainty of healthcare information sources pose new, compelling challenges for clinicians in their decision-making tasks. Only the proper combination of AI and human intelligence capabilities, explicitly taking into account effective and safe interaction paradigms, can deliver care that outperforms what either can do separately. Human-Computer Interaction (HCI) therefore plays a crucial role in the design of software oriented to decision-making in medicine. In this work, we systematically review and discuss several research fields closely linked to HCI and clinical decision-making, subdividing the articles into six themes: Interfaces, Visualization, Electronic Health Records, Devices, Usability, and Clinical Decision Support Systems. These articles typically overlap across themes, revealing that HCI interconnects multiple topics. To focus on HCI and design aspects, the articles under consideration were grouped into four clusters. Advances in AI can effectively support physicians' cognitive processes, which play a central role in decision-making tasks because human mental behavior cannot be completely emulated or captured; the human mind may solve a complex problem even without a statistically significant amount of data by relying upon domain knowledge. For this reason, technology must focus on interactive solutions that support physicians effectively in their daily activities, exploiting their unique knowledge and evidence-based reasoning as well as improving the various aspects highlighted in this review.
Thoracic metastasis in advanced ovarian cancer: comparison between computed tomography and video-assisted thoracic surgery.
OBJECTIVE: To determine which computed tomography (CT) imaging features predict pleural malignancy in patients with advanced epithelial ovarian carcinoma (EOC) using video-assisted thoracic surgery (VATS), pathology, and cytology findings as the reference standard. METHODS: This retrospective study included 44 patients with International Federation of Obstetrics and Gynecology (FIGO) stage III or IV primary or recurrent EOC who had chest CT ≤30 days before VATS. Two radiologists independently reviewed the CT studies and recorded the presence and size of pleural effusions and of ascites; pleural nodules, thickening, enhancement, subdiaphragmatic tumour deposits and supradiaphragmatic, mediastinal, hilar, and retroperitoneal adenopathy; and peritoneal seeding. VATS, pathology, and cytology findings constituted the reference standard. RESULTS: In 26/44 (59%) patients, pleural biopsies were malignant. Only the size of left-sided pleural effusion (reader 1: rho=-0.39, p=0.01; reader 2: rho=-0.37, p=0.01) and presence of ascites (reader 1: rho=-0.33, p=0.03; reader 2: rho=-0.35, p=0.03) were significantly associated with solid pleural metastasis. Pleural fluid cytology was malignant in 26/35 (74%) patients. Only the presence (p=0.03 for both readers) and size (reader 1: rho=0.34, p=0.04; reader 2: rho=0.33, p=0.06) of right-sided pleural effusion were associated with malignant pleural effusion. Interobserver agreement was substantial (kappa=0.78) for effusion size and moderate (kappa=0.46) for presence of solid pleural disease. No other CT features were associated with malignancy at biopsy or cytology. CONCLUSION: In patients with advanced EOC, ascites and left-sided pleural effusion size were associated with solid pleural metastasis, while the presence and size of right-sided effusion were associated with malignant pleural effusion. No other CT features evaluated were associated with pleural malignancy
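To make the statistical analysis concrete, the sketch below (Python, using small synthetic placeholder arrays rather than study data) shows the kind of per-reader association and agreement testing described above: Spearman's rho between an ordinal effusion-size score and the reference-standard outcome, and Cohen's kappa for interobserver agreement.

```python
# Hedged illustration of the analysis described in the abstract; the arrays are
# synthetic placeholders, not study data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# ordinal effusion-size scores (0 = none ... 3 = large) and biopsy outcome (1 = malignant)
effusion_size_reader1 = np.array([0, 1, 3, 2, 0, 3, 1, 2])
biopsy_malignant      = np.array([0, 0, 1, 1, 0, 1, 0, 1])
rho, p_value = spearmanr(effusion_size_reader1, biopsy_malignant)

# interobserver agreement on the presence of solid pleural disease (binary calls)
solid_reader1 = np.array([0, 1, 1, 0, 1, 1, 0, 0])
solid_reader2 = np.array([0, 1, 0, 0, 1, 1, 0, 1])
kappa = cohen_kappa_score(solid_reader1, solid_reader2)
print(f"rho={rho:.2f}, p={p_value:.3f}, kappa={kappa:.2f}")
```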
Reproducibility of CT-based radiomic features against image resampling and perturbations for tumour and healthy kidney in renal cancer patients.
Computed Tomography (CT) is widely used in oncology for morphological evaluation and diagnosis, commonly through visual assessment, often supported by semi-automatic tools. Well-established automatic methods for quantitative imaging offer the opportunity to enrich the radiologist's interpretation with a large number of radiomic features, which need to be highly reproducible to be used reliably in clinical practice. This study investigates feature reproducibility against noise, varying resolutions and segmentations (achieved by perturbing the regions of interest) in a CT dataset with heterogeneous voxel sizes comprising 98 renal cell carcinomas (RCCs) and 93 contralateral normal kidneys (CKs). In particular, first-order (FO) features and second-order texture features based on both 2D and 3D grey-level co-occurrence matrices (GLCMs) were considered. Moreover, this study carries out a comparative analysis of three of the most commonly used interpolation methods, one of which must be selected before any resampling procedure. Results showed that Lanczos interpolation is the most effective at preserving the original information during resampling, and that the median slice resolution coupled with the native slice spacing yields the best reproducibility, with 94.6% and 87.7% of features reproducible in RCC and CK, respectively. GLCM features show their maximum reproducibility when computed at short distances.
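As a minimal illustration of the two operations at the centre of this study, the sketch below resamples a CT volume with Lanczos windowed-sinc interpolation (SimpleITK) and computes a short-distance GLCM texture feature on a 2D slice (scikit-image). The target spacing, the quantisation to 32 grey levels and the choice of contrast as the feature are illustrative assumptions, not the study's exact protocol.

```python
import SimpleITK as sitk
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def resample_lanczos(img, new_spacing=(1.0, 1.0, 1.0)):
    """Resample a CT volume to the given spacing with Lanczos windowed-sinc interpolation."""
    old_spacing, old_size = img.GetSpacing(), img.GetSize()
    new_size = [int(round(osz * osp / nsp))
                for osz, osp, nsp in zip(old_size, old_spacing, new_spacing)]
    f = sitk.ResampleImageFilter()
    f.SetInterpolator(sitk.sitkLanczosWindowedSinc)
    f.SetOutputSpacing(new_spacing)
    f.SetSize(new_size)
    f.SetOutputOrigin(img.GetOrigin())
    f.SetOutputDirection(img.GetDirection())
    return f.Execute(img)

def glcm_contrast(slice_hu, levels=32, distance=1):
    """Quantise a 2D slice to `levels` grey levels and compute GLCM contrast at a short distance."""
    bins = np.linspace(slice_hu.min(), slice_hu.max(), levels)
    q = (np.digitize(slice_hu, bins) - 1).clip(0, levels - 1).astype(np.uint8)
    glcm = graycomatrix(q, [distance], [0], levels=levels, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]
```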
MADGAN: unsupervised medical anomaly detection GAN using multiple adjacent brain MRI slice reconstruction.
BACKGROUND: Unsupervised learning can discover various unseen abnormalities by relying on large-scale unannotated medical images of healthy subjects. To this end, unsupervised methods reconstruct a single 2D/3D medical image and detect outliers either in the learned feature space or from a high reconstruction loss. However, without considering continuity between multiple adjacent slices, they cannot directly discriminate diseases composed of an accumulation of subtle anatomical anomalies, such as Alzheimer's disease (AD). Moreover, no study has shown how unsupervised anomaly detection is associated with disease stages, various (i.e., more than two types of) diseases, or multi-sequence magnetic resonance imaging (MRI) scans. RESULTS: We propose the unsupervised medical anomaly detection generative adversarial network (MADGAN), a novel two-step method using GAN-based multiple adjacent brain MRI slice reconstruction to detect brain anomalies at different stages on multi-sequence structural MRI: (Reconstruction) a model trained with Wasserstein loss with gradient penalty plus a 100 × L1 loss on 3 healthy brain axial MRI slices to reconstruct the next 3 slices reconstructs unseen healthy/abnormal scans; (Diagnosis) the average L1 loss per scan discriminates them by comparing the ground-truth and reconstructed slices. For training, we use two different datasets composed of 1133 healthy T1-weighted (T1) and 135 healthy contrast-enhanced T1 (T1c) brain MRI scans for detecting AD and brain metastases/various diseases, respectively. Our self-attention MADGAN can detect AD on T1 scans at a very early stage, mild cognitive impairment (MCI), with an area under the curve (AUC) of 0.727, and AD at a late stage with an AUC of 0.894, while detecting brain metastases on T1c scans with an AUC of 0.921. CONCLUSIONS: Similar to the way physicians perform a diagnosis, drawing on massive healthy training data, our multiple MRI slice reconstruction approach, MADGAN, is the first that can reliably predict the next 3 slices from the previous 3 only for unseen healthy images. As the first unsupervised various-disease diagnosis method, MADGAN can reliably detect the accumulation of subtle anatomical anomalies and hyper-intense enhancing lesions, such as (especially late-stage) AD and brain metastases, on multi-sequence MRI scans.
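A condensed PyTorch sketch of the training objective described above (Wasserstein loss with gradient penalty plus a 100 × L1 reconstruction term, predicting the next 3 slices from the previous 3) and of the per-scan average L1 anomaly score is given below. The network definitions, data pipeline and all hyper-parameters other than the two loss weights are placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gradient_penalty(D, real, fake, device):
    """WGAN-GP penalty on interpolates between real and reconstructed slice stacks."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def train_step(G, D, opt_g, opt_d, prev3, next3, device, lambda_gp=10.0, lambda_l1=100.0):
    """One training step: critic update, then generator update with adversarial + 100 * L1 loss."""
    fake = G(prev3).detach()
    d_loss = D(fake).mean() - D(next3).mean() + lambda_gp * gradient_penalty(D, next3, fake, device)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    fake = G(prev3)
    g_loss = -D(fake).mean() + lambda_l1 * F.l1_loss(fake, next3)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

def anomaly_score(G, scan_slices):
    """Diagnosis step: average L1 loss per scan between ground-truth and reconstructed slices."""
    losses = [F.l1_loss(G(prev3), next3).item() for prev3, next3 in scan_slices]
    return sum(losses) / len(losses)
```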
Calibrating Ensembles for Scalable Uncertainty Quantification in Deep Learning-based Medical Segmentation
Uncertainty quantification in automated image analysis is highly desired in many applications. Typically, machine learning models in classification or segmentation are developed only to provide binary answers; however, quantifying the uncertainty of these models can play a critical role, for example in active learning or human-machine interaction. Uncertainty quantification is especially difficult for deep learning-based models, which are the state of the art in many imaging applications, and current approaches do not scale well to high-dimensional real-world problems. Scalable solutions often rely on classical techniques, such as dropout during inference, or on training ensembles of identical models with different random seeds to obtain a posterior distribution. In this paper, we show that these approaches fail to approximate the classification probability. In contrast, we propose a scalable and intuitive framework for calibrating ensembles of deep learning models to produce uncertainty quantification measurements that approximate the classification probability. On unseen test data, we demonstrate improved calibration, sensitivity (in two out of three cases) and precision compared with the standard approaches. We further motivate the use of our method in active learning, creating pseudo-labels to learn from unlabeled images, and in human-machine collaboration.
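Since the paper's specific calibration procedure is not detailed here, the sketch below shows one generic way to calibrate an ensemble: average the member softmax probabilities and fit a single temperature on held-out validation logits by minimising the negative log-likelihood. The function names and the choice of temperature scaling are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ensemble_probs(logits_per_model):
    """Average softmax probabilities over an ensemble: (M, N, C) logits -> (N, C) probabilities."""
    e = np.exp(logits_per_model - logits_per_model.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    return probs.mean(axis=0)

def fit_temperature(val_logits, val_labels):
    """Find T > 0 minimising the NLL of the temperature-scaled ensemble on validation data."""
    def nll(T):
        p = ensemble_probs(val_logits / T)
        return -np.log(p[np.arange(len(val_labels)), val_labels] + 1e-12).mean()
    return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x

# Usage: T = fit_temperature(val_logits, val_labels); test_probs = ensemble_probs(test_logits / T)
```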
Tissue-specific and interpretable sub-segmentation of whole tumour burden on CT images by unsupervised fuzzy clustering.
BACKGROUND: Cancer typically exhibits genotypic and phenotypic heterogeneity, which can have prognostic significance and influence therapy response. Computed Tomography (CT)-based radiomic approaches calculate quantitative features of tumour heterogeneity at a mesoscopic level, regardless of macroscopic areas of hypo-dense (i.e., cystic/necrotic), hyper-dense (i.e., calcified), or intermediately dense (i.e., soft tissue) portions. METHOD: With the goal of achieving automated sub-segmentation of these three tissue types, we present a two-stage computational framework based on unsupervised Fuzzy C-Means Clustering (FCM) techniques; no existing approach has specifically addressed this task so far. Our tissue-specific image sub-segmentation was tested on ovarian cancer (pelvic/ovarian and omental disease) and renal cell carcinoma CT datasets using both overlap-based and distance-based metrics for evaluation. RESULTS: On all tested sub-segmentation tasks, our two-stage approach outperformed conventional segmentation techniques: fixed multi-thresholding, the Otsu method, and automatic cluster-number selection heuristics for the K-means clustering algorithm. In addition, experiments showed that integrating spatial information into the FCM algorithm generally achieves more accurate segmentation results, whilst the kernelised FCM versions are not beneficial. Across the investigated sub-segmentation tasks, the best spatial FCM configuration achieved average Dice similarity coefficient values of at least 81.94±4.76 and 83.43±3.81 for the hyper-dense and hypo-dense components, respectively. CONCLUSIONS: The proposed intelligent framework could be readily integrated into clinical research environments and provides robust tools for future radiomic biomarker validation.
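For orientation, here is a minimal plain fuzzy c-means sketch over the voxel intensities (HU) inside a tumour mask, clustering into three components as a much-simplified stand-in for the paper's two-stage spatial FCM framework; the implementation details, variable names and convergence settings are assumptions.

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Cluster 1-D intensities x into c fuzzy clusters; returns centroids and membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))           # membership matrix (N, c), rows sum to 1
    for _ in range(iters):
        um = u ** m
        centroids = (um.T @ x) / um.sum(axis=0)          # fuzzily weighted cluster centres
        d = np.abs(x[:, None] - centroids[None, :]) + 1e-12
        u_new = 1.0 / (d ** (2.0 / (m - 1.0)))           # standard FCM membership update
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centroids, u

# Usage (hypothetical arrays): hu = ct_volume[tumour_mask]; centroids, u = fuzzy_cmeans_1d(hu)
# label = u.argmax(axis=1); sorting the centroids (np.argsort(centroids)) maps clusters to
# hypo-, intermediate- and hyper-dense tissue.
```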
Machine Learning for COVID-19 Diagnosis and Prognostication: Lessons for Amplifying the Signal While Reducing the Noise.
Funder: Wellcome Trust.